Complex numbers are an extension of the real numbers that include a new element, $i$, such that $i^2 = -1$.
This allows us to solve equations like $x^2 + 1 = 0$.
A complex number is written as $z = x + iy$, where $x$ and $y$ are real numbers.
The real part of $z$ is denoted by $\operatorname{Re}(z) = x$ and the imaginary part of $z$ is denoted by $\operatorname{Im}(z) = y$.
Complex numbers are typically represented in the complex plane, where the real part lies along the $x$-axis and the imaginary part lies along the $y$-axis.
The other form of a complex number is the polar form, $z = re^{i\theta}$, where $r$ is the magnitude of $z$ and $\theta$ is the argument of $z$.
The magnitude of a complex number is given by $r = |z| = \sqrt{x^2 + y^2}$ and the argument is given by $\theta = \arg(z) = \arctan(y/x)$ (taking care to choose the correct quadrant).
$e^{i\theta}$ is a shorthand for $\cos\theta + i\sin\theta$, which is known as Euler's formula.
It comes from the fact that $e^{i\theta}$ fundamentally represents circular motion, as it satisfies the differential equation $\frac{dz}{d\theta} = iz$, and multiplication by $i$ is a quarter turn in the complex plane.
(You can also prove Euler's formula using the Taylor series of $e^x$, $\sin x$, and $\cos x$.)
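As a quick numerical sanity check (a minimal Python sketch, not part of the derivation), we can compare the two sides of Euler's formula directly:

```python
import cmath
import math

theta = 0.7  # an arbitrary angle in radians
lhs = cmath.exp(1j * theta)                      # e^{i*theta}
rhs = complex(math.cos(theta), math.sin(theta))  # cos(theta) + i*sin(theta)
print(abs(lhs - rhs))  # agrees to machine precision
```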
Below is a quick table of the properties and operations of complex numbers in both Cartesian and polar form (for $z = x + iy = re^{i\theta}$, and $z_1, z_2$ defined analogously):

| | Cartesian Form | Polar Form |
| --- | --- | --- |
| Real Part | $x$ | $r\cos\theta$ |
| Imaginary Part | $y$ | $r\sin\theta$ |
| Magnitude | $\sqrt{x^2 + y^2}$ | $r$ |
| Argument | $\arctan(y/x)$ | $\theta$ |
| Addition | $(x_1 + x_2) + i(y_1 + y_2)$ | (convert to Cartesian) |
| Multiplication | $(x_1 x_2 - y_1 y_2) + i(x_1 y_2 + x_2 y_1)$ | $r_1 r_2 e^{i(\theta_1 + \theta_2)}$ |
| Division | $\dfrac{(x_1 x_2 + y_1 y_2) + i(x_2 y_1 - x_1 y_2)}{x_2^2 + y_2^2}$ | $\dfrac{r_1}{r_2} e^{i(\theta_1 - \theta_2)}$ |
| Conjugate | $x - iy$ | $r e^{-i\theta}$ |
| Inverse | $\dfrac{x - iy}{x^2 + y^2}$ | $\dfrac{1}{r} e^{-i\theta}$ |
| Power | (expand $(x + iy)^n$ binomially) | $r^n e^{in\theta}$ |
It is clear that some operations are easier in Cartesian form and some are easier in polar form.
For example, multiplication and division are easier in polar form, while addition and subtraction are easier in Cartesian form.
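To illustrate, here is a short Python sketch (using the standard `cmath` module) that multiplies two arbitrary complex numbers both ways and confirms the results agree:

```python
import cmath

z1 = 3 + 4j
z2 = 1 - 2j

# Cartesian multiplication: (x1 x2 - y1 y2) + i (x1 y2 + x2 y1)
cartesian = complex(z1.real * z2.real - z1.imag * z2.imag,
                    z1.real * z2.imag + z1.imag * z2.real)

# Polar multiplication: magnitudes multiply, arguments add
r1, t1 = cmath.polar(z1)
r2, t2 = cmath.polar(z2)
polar = cmath.rect(r1 * r2, t1 + t2)

print(cartesian, polar)  # identical up to floating-point rounding
```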
Below are more properties of complex numbers, this time independent of the form:

- $z\bar{z} = |z|^2$
- $|z_1 z_2| = |z_1||z_2|$
- $\overline{z_1 + z_2} = \bar{z}_1 + \bar{z}_2$ and $\overline{z_1 z_2} = \bar{z}_1\bar{z}_2$
- $\arg(z_1 z_2) = \arg(z_1) + \arg(z_2)$ (modulo $2\pi$)
A complex function is a function that takes a complex number as an input and returns a complex number as an output.
For example, $f(z) = z^2$ is a complex function.
The domain of a complex function is the set of complex numbers for which the function is defined.
Generally, we write $f(z) = u(x, y) + iv(x, y)$, where $u$ and $v$ are real functions of the real variables $x$ and $y$.
The derivative of a complex function can be defined in the same way as the derivative of a real function:

$$f'(z) = \lim_{\Delta z \to 0} \frac{f(z + \Delta z) - f(z)}{\Delta z}$$

However, because $z$ is two-dimensional, $\Delta z$ can approach $0$ from any direction.
This means that there is ambiguity in the definition of the derivative of a complex function.
For example, if we approach along the real axis ($\Delta z = \Delta x$), we get:

$$f'(z) = \frac{\partial u}{\partial x} + i\frac{\partial v}{\partial x}$$

Conversely, if we approach along the imaginary axis ($\Delta z = i\,\Delta y$), we get:

$$f'(z) = \frac{1}{i}\left(\frac{\partial u}{\partial y} + i\frac{\partial v}{\partial y}\right) = \frac{\partial v}{\partial y} - i\frac{\partial u}{\partial y}$$

To resolve this ambiguity, we assert that both limits must be equal, which gives us the Cauchy-Riemann equations:

$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$$
If these equations are satisfied, then the function is said to be holomorphic or analytic.
This means that the function is differentiable at every point in its domain.
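The Cauchy-Riemann equations can be probed numerically. The sketch below (the helper `cr_residuals` is ours, for illustration) estimates the partial derivatives of $u$ and $v$ by central differences and reports how badly the equations are violated at a point:

```python
def cr_residuals(f, x, y, h=1e-6):
    """Finite-difference residuals |u_x - v_y| and |u_y + v_x| of the
    Cauchy-Riemann equations at z = x + iy (both ~0 if f is holomorphic)."""
    u = lambda a, b: f(complex(a, b)).real
    v = lambda a, b: f(complex(a, b)).imag
    u_x = (u(x + h, y) - u(x - h, y)) / (2 * h)
    u_y = (u(x, y + h) - u(x, y - h)) / (2 * h)
    v_x = (v(x + h, y) - v(x - h, y)) / (2 * h)
    v_y = (v(x, y + h) - v(x, y - h)) / (2 * h)
    return abs(u_x - v_y), abs(u_y + v_x)

print(cr_residuals(lambda z: z * z, 1.0, 2.0))          # ~(0, 0): analytic
print(cr_residuals(lambda z: z.conjugate(), 1.0, 2.0))  # (2, 0): not analytic
```

The conjugate function fails the first equation ($u = x$, $v = -y$ gives $u_x = 1$ but $v_y = -1$), which is why it shows a residual of $2$.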
Theorem: All finite polynomials are analytic in the entire complex plane.
Proof. A polynomial is a sum of terms of the form $a_n z^n$.
The proof is straightforward, so we only sketch the main steps:

1. $f(z) = z$ is analytic: with $u = x$ and $v = y$, the Cauchy-Riemann equations hold everywhere since $\frac{\partial u}{\partial x} = 1 = \frac{\partial v}{\partial y}$ and $\frac{\partial u}{\partial y} = 0 = -\frac{\partial v}{\partial x}$. (Constants are analytic for the same reason.)
2. The sum of two analytic functions is analytic, and the product of two analytic functions is analytic.
3. All finite polynomials can be written as a sum of terms of the form $a_n z^n$, each of which is a product of analytic functions (the constant $a_n$ and $n$ copies of $z$).

Therefore, all finite polynomials are analytic.
Note that this does not easily generalize to infinite polynomials (power series) because the set of polynomials is not complete.
In other words, the limit of a (Cauchy) sequence of polynomials may not be a polynomial. For example, the limit of the sequence $p_n(z) = \sum_{k=0}^{n} \frac{z^k}{k!}$ is $e^z$, which is not a polynomial.
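A quick numerical illustration of this limit (a sketch; the partial-sum helper is ours, not from the text):

```python
import cmath
import math

def partial_sum(z, n):
    # The polynomial p_n(z) = sum_{k=0}^{n} z^k / k!
    return sum(z**k / math.factorial(k) for k in range(n + 1))

z = 1 + 1j
for n in (2, 5, 10, 20):
    print(n, abs(partial_sum(z, n) - cmath.exp(z)))
# the error shrinks rapidly: the polynomial sequence converges to e^z,
# which is not itself a polynomial
```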
The following theorem is very powerful and useful in complex analysis:
Theorem: If a function $f$ is written such that it only contains $z$ but not $\bar{z}$, then $f$ is analytic in the region where it is defined.
Proof. To prove this theorem, we need to show that an analytic function does not depend on the conjugate of $z$.
In other words, if $f$ is analytic, then $\frac{\partial f}{\partial \bar{z}} = 0$.
Using the multivariable chain rule, we can write:

$$\frac{\partial f}{\partial \bar{z}} = \frac{\partial f}{\partial x}\frac{\partial x}{\partial \bar{z}} + \frac{\partial f}{\partial y}\frac{\partial y}{\partial \bar{z}}$$

Since $z = x + iy$ and $\bar{z} = x - iy$, then $x = \frac{1}{2}(z + \bar{z})$ and $y = \frac{1}{2i}(z - \bar{z})$, so $\frac{\partial x}{\partial \bar{z}} = \frac{1}{2}$ and $\frac{\partial y}{\partial \bar{z}} = -\frac{1}{2i} = \frac{i}{2}$.
Additionally, since $f = u + iv$, then $\frac{\partial f}{\partial x} = \frac{\partial u}{\partial x} + i\frac{\partial v}{\partial x}$ and $\frac{\partial f}{\partial y} = \frac{\partial u}{\partial y} + i\frac{\partial v}{\partial y}$.
Substituting these into the equation above, we get:

$$\frac{\partial f}{\partial \bar{z}} = \frac{1}{2}\left(\frac{\partial u}{\partial x} + i\frac{\partial v}{\partial x}\right) + \frac{i}{2}\left(\frac{\partial u}{\partial y} + i\frac{\partial v}{\partial y}\right) = \frac{1}{2}\left(\frac{\partial u}{\partial x} - \frac{\partial v}{\partial y}\right) + \frac{i}{2}\left(\frac{\partial v}{\partial x} + \frac{\partial u}{\partial y}\right)$$

From the first Cauchy-Riemann equation, we know that $\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}$, so the real term vanishes.
From the second Cauchy-Riemann equation, we know that $\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$, so the imaginary term vanishes.
Therefore, $\frac{\partial f}{\partial \bar{z}} = 0$.
This theorem is very useful because it allows us to determine if a function is analytic by checking if it contains $\bar{z}$.
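We can even test this criterion numerically, using the Wirtinger derivative $\frac{\partial f}{\partial \bar{z}} = \frac{1}{2}\left(\frac{\partial f}{\partial x} + i\frac{\partial f}{\partial y}\right)$ (the finite-difference helper below is our own illustration):

```python
def d_dzbar(f, z, h=1e-6):
    """Central-difference estimate of df/dzbar = (df/dx + i*df/dy)/2."""
    f_x = (f(z + h) - f(z - h)) / (2 * h)
    f_y = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)
    return (f_x + 1j * f_y) / 2

z = 1 + 1j
print(abs(d_dzbar(lambda w: w**3 + 2 * w, z)))  # ~0: no zbar, so analytic
print(abs(d_dzbar(lambda w: abs(w)**2, z)))     # ~|z|: |w|^2 = w*wbar depends on zbar
```

For $|w|^2 = w\bar{w}$ the Wirtinger derivative is $w$, so the second value is approximately $|z| = \sqrt{2}$.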
If a function $f$ is analytic, then it can be expanded as a Taylor series around a point $z_0$:

$$f(z) = \sum_{n=0}^{\infty} a_n (z - z_0)^n$$

And if a function is analytic at a point $z_0$, this point is called a regular point.
If a function is not analytic at a point, then that point is called a singular point.
The integral of a complex function is defined in the same way as the integral of a real function.
However, because the complex plane is two-dimensional, the integral can be taken along any path in the complex plane.
This means that the integral of a complex function requires more information than just the endpoints of the path.
However, there is another property of analytic functions that makes complex integration easier.
Theorem: If a function $f$ is analytic in a region $R$, then the integral of $f$ is path-independent in $R$.
Proof. Recall from vector calculus that there are many ways to state that a vector field is conservative:
1. The vector field is the gradient of a scalar field: $\vec{F} = \nabla\phi$.
2. The line integral of the vector field is path-independent: $\int_A^B \vec{F} \cdot d\vec{r}$ depends only on the endpoints $A$ and $B$.
3. The curl of the vector field is zero: $\nabla \times \vec{F} = \vec{0}$.
The third statement is the most general and can be applied to complex functions.
Suppose we integrate a complex function $f$ along a curve $C$ in the complex plane, where the endpoints of the curve are $A$ and $B$.
Then the integral is given by:

$$\int_C f(z)\,dz = \int_C (u + iv)(dx + i\,dy) = \int_C (u\,dx - v\,dy) + i\int_C (v\,dx + u\,dy)$$

We borrow the notation from vector calculus and write $\vec{F}_1 = \langle u, -v \rangle$ and $\vec{F}_2 = \langle v, u \rangle$.
Then the integral becomes:

$$\int_C f(z)\,dz = \int_C \vec{F}_1 \cdot d\vec{r} + i\int_C \vec{F}_2 \cdot d\vec{r}$$
Next, we apply Stokes's theorem to the integrals above:

$$\oint_C \vec{F} \cdot d\vec{r} = \iint_S \left(\nabla \times \vec{F}\right) \cdot d\vec{A}$$

(Although curl is technically defined in three dimensions, we can still use a version of it in two dimensions: treat the fields as three-dimensional with no dependence on the third coordinate, so that only the $\hat{k}$-component of the curl survives.)
The curls are explicitly given by:

$$\nabla \times \vec{F}_1 = \left(-\frac{\partial v}{\partial x} - \frac{\partial u}{\partial y}\right)\hat{k}, \qquad \nabla \times \vec{F}_2 = \left(\frac{\partial u}{\partial x} - \frac{\partial v}{\partial y}\right)\hat{k}$$

They are zero by the Cauchy-Riemann equations.
As we stated earlier, if the curl of a vector field is zero, then the line integral is path-independent.
Therefore, both integrals above are path-independent.
Hence, the integral of an analytic function is path-independent.
This theorem is very useful because it allows us to evaluate complex integrals by choosing the path that is easiest to integrate over.
Corollary (Cauchy-Goursat Theorem): If a function $f$ is analytic in a region $R$ and $C$ is a closed curve in $R$, then $\oint_C f(z)\,dz = 0$.
This theorem is a direct consequence of the path-independence of the integral of an analytic function.
A closed curve starts and ends at the same point, so its integral equals the integral over the trivial (single-point) path between those endpoints, which is zero.
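The corollary is easy to probe numerically. The sketch below (our own discretization, not from the text) approximates a contour integral around the unit circle by a Riemann sum; for an analytic integrand it vanishes, while an integrand with a pole inside the contour does not:

```python
import cmath
import math

def unit_circle_integral(f, n=2000):
    """Riemann-sum approximation of the integral of f around the unit
    circle, parametrized z = e^{it}, dz = i e^{it} dt."""
    dt = 2 * math.pi / n
    total = 0j
    for k in range(n):
        z = cmath.exp(1j * k * dt)
        total += f(z) * 1j * z * dt
    return total

print(abs(unit_circle_integral(lambda z: z**2 + 3 * z + 1)))  # ~0 (analytic)
print(unit_circle_integral(lambda z: 1 / z))                  # ~2*pi*i (pole at 0)
```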
Cauchy's integral formula is a powerful result in complex analysis. It states:

$$f(z_0) = \frac{1}{2\pi i}\oint_C \frac{f(z)}{z - z_0}\,dz$$

where $f$ is analytic in a region that contains the closed curve $C$, and $z_0$ is a point inside $C$ (a singular point of the integrand $\frac{f(z)}{z - z_0}$).
We traverse the curve in the counterclockwise direction.
To prove this formula, we leverage the path-independence of the integral of an analytic function.
We shall use a different path that is easier to integrate over.
Namely, we traverse $C$ normally, but at a certain point, we make a small detour and go towards $z_0$ until we reach a small circle $C_\epsilon$ of radius $\epsilon$ around $z_0$.
Then we traverse $C_\epsilon$ in the clockwise direction and return to the original path.
In practice, we are replacing the curve $C$ with the combined curve $C' = C + (\text{detours}) + C_\epsilon$, and we take the limit as $\epsilon \to 0$.
The two connecting detour segments are infinitesimally close to each other, so their contributions cancel each other out.
The key is that while we are taking the limit as $\epsilon \to 0$, we are not actually letting $\epsilon$ be zero.
Hence we never touch the singular point $z_0$, and so the integrand is analytic in the region we are integrating over.
This means that the integral over the combined curve $C'$ is zero (since it is closed and encloses no singularity), and we are left with the integral over the small circle $C_\epsilon$:

$$\oint_C \frac{f(z)}{z - z_0}\,dz = \oint_{C_\epsilon} \frac{f(z)}{z - z_0}\,dz$$

where both curves are now traversed counterclockwise (flipping the clockwise traversal of $C_\epsilon$ contributes the minus sign that moves it to the other side of the equation).
Note that while $C'$ is closed and its integral is zero, as we let $\epsilon \to 0$, the integral over $C_\epsilon$ is not necessarily zero because it approaches the singular point $z_0$.
Thus we cannot just say that the integral over $C_\epsilon$ is zero.
Since $C_\epsilon$ is a circle, we parametrize it as:

$$z = z_0 + \epsilon e^{i\theta}, \qquad dz = i\epsilon e^{i\theta}\,d\theta$$

and the limit we are taking is $\epsilon \to 0$.
The bounds of the integral are $\theta = 0$ to $\theta = 2\pi$.
The integral becomes:

$$\oint_{C_\epsilon} \frac{f(z)}{z - z_0}\,dz = \int_0^{2\pi} \frac{f(z_0 + \epsilon e^{i\theta})}{\epsilon e^{i\theta}}\, i\epsilon e^{i\theta}\,d\theta = i\int_0^{2\pi} f(z_0 + \epsilon e^{i\theta})\,d\theta$$

But as $\epsilon \to 0$, the integrand becomes $f(z_0)$, and so the integral becomes $2\pi i f(z_0)$.
Therefore, dividing both sides by $2\pi i$ gives us Cauchy's integral formula:

$$f(z_0) = \frac{1}{2\pi i}\oint_C \frac{f(z)}{z - z_0}\,dz$$
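As a sanity check, the formula can be verified numerically (a sketch; the contour, sample point, and step count are our arbitrary choices):

```python
import cmath
import math

def cauchy_value(f, z0, n=4000):
    """Approximate (1/(2*pi*i)) * contour integral of f(z)/(z - z0)
    over the unit circle; z0 must lie inside the circle."""
    dt = 2 * math.pi / n
    total = 0j
    for k in range(n):
        z = cmath.exp(1j * k * dt)
        total += f(z) / (z - z0) * 1j * z * dt
    return total / (2j * math.pi)

z0 = 0.3 + 0.2j
print(abs(cauchy_value(cmath.exp, z0) - cmath.exp(z0)))  # ~0
```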
Recall that a Taylor series is a power series expansion of a function around a point.
It is the same for complex functions:

$$f(z) = \sum_{n=0}^{\infty} a_n (z - z_0)^n$$

where $a_n = \frac{f^{(n)}(z_0)}{n!}$.
By the Cauchy integral formula, we can write the $n$th derivative of $f$ as:

$$f^{(n)}(z_0) = \frac{n!}{2\pi i}\oint_C \frac{f(z)}{(z - z_0)^{n+1}}\,dz$$

Thus the $n$th coefficient in the series expansion is:

$$a_n = \frac{1}{2\pi i}\oint_C \frac{f(z)}{(z - z_0)^{n+1}}\,dz$$

If we let $n$ be negative as well, we get the Laurent series:

$$f(z) = \sum_{n=-\infty}^{\infty} a_n (z - z_0)^n$$
The difference between the two series lies in their regions of convergence.
The Taylor series converges in a disk around , while the Laurent series converges in an annulus around .
This is one of the most important results in complex analysis for physics.
We begin with a function $f$ that is analytic in a region $R$ except for a singular point at $z_0$.
First, expand $f$ as a Laurent series around $z_0$:

$$f(z) = \sum_{n=-\infty}^{\infty} a_n (z - z_0)^n$$
If it is possible to write $f$ as:

$$f(z) = \frac{a_{-1}}{z - z_0} + \sum_{n=0}^{\infty} a_n (z - z_0)^n$$

then $f$ has a simple pole at $z_0$. A pole is a point where the function goes to infinity.
It is clear that at $z_0$, the function goes to infinity because of the $\frac{a_{-1}}{z - z_0}$ term.
A higher-order pole is called a pole of order $n$, and it has the form:

$$f(z) = \frac{a_{-n}}{(z - z_0)^n} + \cdots + \frac{a_{-1}}{z - z_0} + \sum_{k=0}^{\infty} a_k (z - z_0)^k$$
Now consider a closed curve $C$ within the Laurent series' region of convergence that encloses $z_0$.
Then the integral of $f$ over $C$ is:

$$\oint_C f(z)\,dz = \oint_C \frac{a_{-1}}{z - z_0}\,dz + \oint_C \sum_{n=0}^{\infty} a_n (z - z_0)^n\,dz$$

Since the term on the right is a convergent power series, it is analytic and its integral over the closed curve is zero.
Thus the integral is just the integral of the simple pole term:

$$\oint_C f(z)\,dz = a_{-1}\oint_C \frac{dz}{z - z_0} = 2\pi i\,a_{-1}$$

where we have used Cauchy's integral formula (applied to the constant function $f = 1$).
The term $a_{-1}$ is called the residue of $f$ at $z_0$, and it is denoted by $\operatorname{Res}(f, z_0)$.
If we take the simple-pole expression above and multiply both sides by $(z - z_0)$, we get:

$$(z - z_0) f(z) = a_{-1} + \sum_{n=0}^{\infty} a_n (z - z_0)^{n+1}$$

If we take the limit as $z \to z_0$, the right side becomes $a_{-1}$ because all the other terms vanish.
Therefore we say:

$$\operatorname{Res}(f, z_0) = \lim_{z \to z_0} (z - z_0) f(z)$$
In other words, the residue is a measure of how much $f(z)$ and $(z - z_0)$ "compete" with each other at $z_0$ (one goes to infinity and the other goes to zero).
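For instance, for $f(z) = \frac{1}{1 + z^2}$ the limit formula recovers the residue $\frac{1}{2i} = -\frac{i}{2}$ at the simple pole $z_0 = i$ (a small numeric sketch of our own):

```python
# Residue of f(z) = 1/(1 + z^2) at its simple pole z0 = i, via
# Res(f, z0) = lim_{z -> z0} (z - z0) f(z).  Exact value: 1/(2i) = -0.5j.
def f(z):
    return 1 / (1 + z * z)

z0 = 1j
for eps in (1e-2, 1e-4, 1e-6):
    z = z0 + eps
    print(eps, (z - z0) * f(z))  # approaches -0.5j
```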
Now suppose $f$ has multiple simple poles at $z_1, z_2, \ldots, z_k$.
Then its integral over a closed curve $C$ enclosing all of the poles can be written as:

$$\oint_C f(z)\,dz = 2\pi i \sum_{j=1}^{k} \operatorname{Res}(f, z_j)$$

This is the residue theorem.
The residue theorem is very powerful and can be used to evaluate integrals that are difficult to evaluate by other means.
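Numerically, we can compare a contour integral against $2\pi i$ times the enclosed residues. For $f(z) = \frac{1}{1+z^2}$, the residues are $\frac{1}{2i}$ at $z = i$ and $-\frac{1}{2i}$ at $z = -i$ (the discretization below is our own sketch):

```python
import cmath
import math

def circle_integral(f, center, radius, n=4000):
    """Riemann-sum approximation of the integral of f around the
    circle |z - center| = radius, traversed counterclockwise."""
    dt = 2 * math.pi / n
    total = 0j
    for k in range(n):
        w = radius * cmath.exp(1j * k * dt)   # offset from the center
        total += f(center + w) * 1j * w * dt  # dz = i*w*dt
    return total

f = lambda z: 1 / (1 + z * z)
# Circle enclosing only z = i: integral = 2*pi*i * (1/(2i)) = pi
print(circle_integral(f, 1j, 0.5))
# Circle enclosing both poles: the residues cancel, so the integral is ~0
print(abs(circle_integral(f, 0, 2)))
```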
For example, consider the integral:

$$\int_{-\infty}^{\infty} \frac{e^{ix}}{1 + x^2}\,dx$$
This integral is difficult to evaluate with the typical Calculus 2 methods.
However, we can evaluate it using the residue theorem.
First, we let the variable we're integrating over be complex: $x \to z$.
Label the integrand as:

$$f(z) = \frac{e^{iz}}{1 + z^2}$$
It can be shown that $e^{iz}$ is analytic in the entire complex plane (it is an entire function) because its power series converges everywhere.
The rest of the integrand, $\frac{1}{1 + z^2}$, has simple poles at $z = i$ and $z = -i$.
This is because at these points, the denominator goes to zero, so the fraction goes to infinity.
Next, we split the integrand into partial fractions:

$$\frac{1}{1 + z^2} = \frac{1}{(z - i)(z + i)} = \frac{1}{2i}\left(\frac{1}{z - i} - \frac{1}{z + i}\right)$$
Now we consider which contour we integrate over.
We choose the contour to be the semicircle of radius $R$ in the upper half-plane, closed off by the segment of the real axis from $-R$ to $R$, and we let $R \to \infty$.
This contour is nice because the integrand contains an exponential that decays as we go up in the complex plane; $e^{iz}$ decays as $\operatorname{Im}(z) \to \infty$.
This is more concretely shown if we expand $e^{iz}$ as:

$$e^{iz} = e^{i(x + iy)} = e^{ix} e^{-y}$$

So there are two parts to the integrand: $e^{ix}$ simply oscillates as $x$ changes, and $e^{-y}$ decays as $y$ increases.
This means that the integrand decays as we go up in the complex plane.
Thus eventually, when we let the radius go to infinity, the integral over the semicircle goes to zero and we're left with the integral over the real line.
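Carrying the calculation through, the residue theorem predicts $\int_{-\infty}^{\infty} \frac{e^{ix}}{1 + x^2}\,dx = 2\pi i \operatorname{Res}\!\left(\frac{e^{iz}}{1 + z^2},\, i\right) = 2\pi i \cdot \frac{e^{-1}}{2i} = \frac{\pi}{e}$. A brute-force quadrature check of this value (a sketch; the truncation window and step count are arbitrary choices of ours):

```python
import math

# Midpoint-rule check of Int_{-inf}^{inf} e^{ix}/(1+x^2) dx against the
# residue-theorem value pi/e.  The imaginary part (sin) integrates to 0
# by odd symmetry, so we only need the real (cos) part.
def integrand(x):
    return math.cos(x) / (1 + x * x)

a, b, n = -200.0, 200.0, 400_000  # truncation window and step count (arbitrary)
h = (b - a) / n
approx = h * sum(integrand(a + (k + 0.5) * h) for k in range(n))

print(approx, math.pi / math.e)  # both ~1.1557
```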